Trust and Ethics in Technology Part 1: Ethical Intelligence - Bristol Technology Festival 2019
07-11-2019

This week, our student engineer Luke attended a talk on Trust and Ethics in Technology at the Bristol Museum and Art Gallery, part of the Bristol Technology Festival held in early November. The event was hosted by Lanciaconsult under the banner of #beingrelevant2019 (see their site).

"This particular talk raises the issue of trust.
Trust, until recently has never been as contentious an issue with the public in the technology field.

"This particular talk raises the issue of trust. Trust, until recently has never been as contentious an issue with the public in the technology field. It's persisted for as long as companies have held your data in the digital age, consumer ignorance toward safeguards meant that they were not necessarily the go-to choice when deciding to use a service.

In the age of whistleblowers and hacking scandals, a tighter spotlight has been thrust upon the issues of safety and trust in the platforms that we use, and with the continued development of machine learning and our drive toward creating AI, ethics too has been brought into question.

The talks on offer focused on these core themes:

  1. Artificial Intelligence

  2. Autonomous Driving

  3. Financial Services

  4. Identity and Security

  5. Ethics

  6. Military and Government

Whilst there's a lot of ground to cover from the nearly four-hour event, I'm going to focus on two talks in particular: Ethical Intelligence and the Military Application of AI.

Beginning the first round of talks was Olivia Gambelin, the Co-Founder and CSO of Ethical Intelligence. Olivia has previous experience in Silicon Valley and, after research with EU Institutions, completed her MSc in AI Ethics at the University of Edinburgh.

She began by listing a common set of ethical values: Respect dignity, Connect openly, Care and wellbeing, and Protect social values.

That seemed rather straightforward, but the tone quickly shifted toward why these values can often come into conflict with one another. It transpires that these become real issues, as the University of Edinburgh found out when it backed several projects to build an AI algorithm to detect mental health issues.

This was contentious: the aim is to ensure the safety and security of all staff and students on and off campus, but doing so would infringe upon these values.

You can't, on the one hand, monitor an individual's every move, as this would infringe upon their dignity; yet abstaining from doing so would neglect their care and wellbeing. The emotional tension between these values is the problem, and the thought of a machine learning your behavior patterns, possibly with a flawed or compromised framework behind it, is worrying at best and scary at worst.

The solution, then, is to take a step back: to remove the human emotion that compromises those values. At first this seemed at odds with the rest of the talk, but then I began to see her point.

There needs to be clear and creative analysis both in fitting the framework and in processing the data. There needs to be an acknowledgement that values will be compromised. One size does not fit all.

How do you balance the charged emotion that comes from dealing with mental health issues, and how can conflict be avoided? Removing the decision from people and taking the evidence on board statistically, not empirically, she says, will allow the user of the algorithm to take a measured approach.

And then it clicked for me. This algorithm is not meant to replace or do the job of a human. It's meant as a tool to help, so that the recorded observations, the patterns of behavior and characteristics associated with mental illness, can be analysed objectively, with the final evidence presented clearly for a human to make an informed decision.

In many ways, this reminded me of an issue that care workers, lawyers, and others in people-facing professions suffer from: emotional burnout. Having to witness and process unsavoury information on an almost daily basis leads to ill-informed judgments, apathy and depression.

This tool would be a step toward shouldering some of that burden for these professionals.

I only hope that the human element it aims to replace isn't the part that's required to make informed decisions."